# Efficient and Lightweight
**Depth Anything V2** — scenario-labs
Tags: Depth, 3D Vision, Transformers
75 downloads · 0 likes

Depth Anything V2 is currently the most powerful monocular depth estimation model, trained on 595,000 synthetically annotated images and over 62 million real unlabeled images, offering finer details and stronger robustness.
**Depth Anything V2 Small Hf** — depth-anything · Apache-2.0
Tags: 3D Vision, Transformers
438.72k downloads · 15 likes

Depth Anything V2 is currently the most powerful monocular depth estimation model, trained on 595,000 synthetically annotated images and over 62 million real unlabeled images, featuring fine details and robustness.
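As a hedged sketch of how a card like this is typically used: the checkpoint can be loaded through the Transformers `depth-estimation` pipeline. The model id `depth-anything/Depth-Anything-V2-Small-hf`, the image path, and the `normalize_depth` helper are assumptions for illustration, not details taken from the card itself.

```python
import numpy as np


def normalize_depth(depth: np.ndarray) -> np.ndarray:
    """Rescale a raw relative-depth map to uint8 [0, 255] for saving as an image."""
    d = depth.astype(np.float32)
    span = max(float(d.max() - d.min()), 1e-8)  # guard against flat maps
    return ((d - d.min()) / span * 255.0).astype(np.uint8)


def estimate_depth(image_path: str) -> np.ndarray:
    """Sketch: run Depth Anything V2 Small via the Transformers pipeline.

    Downloads the checkpoint on first call; the model id is an assumption
    based on the card name above.
    """
    from PIL import Image
    from transformers import pipeline

    pipe = pipeline("depth-estimation",
                    model="depth-anything/Depth-Anything-V2-Small-hf")
    result = pipe(Image.open(image_path))
    # result["depth"] is a PIL image of relative depth values.
    return normalize_depth(np.array(result["depth"]))
```

Note that the model predicts *relative* depth, so the normalization step only produces a per-image visualization, not metric distances.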
**Sentence Transformers All-MiniLM-L6-v2** — danielpark · Apache-2.0
Tags: Text Embedding, English
78 downloads · 1 like

A lightweight sentence embedding model based on the MiniLM architecture, designed for efficient sentence-similarity computation.
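Sentence similarity with an embedding model like this usually comes down to cosine similarity between encoded vectors. A minimal sketch follows; the model id `sentence-transformers/all-MiniLM-L6-v2` is an assumption based on the card name, and only the pure cosine helper is exercised without downloading weights.

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def embed(sentences: list[str]):
    """Sketch: encode sentences with the model (downloads weights on first call).

    The model id is an assumption based on the card name above.
    """
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
    return model.encode(sentences)  # one fixed-size vector per sentence
```

In practice one would call `embed(["query", "candidate"])` and compare the two returned vectors with `cosine_similarity`.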
## Featured Recommended AI Models